

Search for: All records

Creators/Authors contains: "Mao, Hanzi"


  1.
    Abstract—The emergence of remote sensing technologies coupled with local monitoring workstations gives us an unprecedented ability to monitor the environment at large scale. Mining information from multi-channel geo-spatiotemporal data, however, poses great challenges for many computational sustainability applications. Most existing approaches adopt various dimensionality reduction techniques without fully taking advantage of the spatiotemporal nature of the data. In addition, the lack of labeled training data raises another challenge for modeling such data. In this work, we propose a novel semi-supervised attention-based deep representation model that learns context-aware spatiotemporal representations for prediction tasks. A combination of convolutional neural networks with a hybrid attention mechanism is adopted to extract spatial and temporal variations in the geo-spatiotemporal data. Recognizing the importance of capturing more complete temporal dependencies, we propose a hybrid attention mechanism that integrates a learnable global query into the classic self-attention mechanism. To overcome the data scarcity issue, sampled spatial and temporal context that naturally resides in the widely available unlabeled geo-spatiotemporal data is exploited to aid meaningful representation learning. We conduct experiments on a large-scale real-world crop yield prediction task. The results show that our method significantly outperforms existing state-of-the-art yield prediction methods, especially under the stress of training data scarcity.
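    The abstract describes the hybrid attention mechanism only at a high level: a learnable global query is integrated into classic self-attention over the temporal features. The following is a minimal PyTorch sketch of one plausible reading of that idea; the module name, the additive fusion of the two attention outputs, and the single-head formulation are assumptions for illustration, not the authors' exact design.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HybridAttention(nn.Module):
        """Self-attention augmented with a learnable global query (illustrative sketch)."""

        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            # Learnable global query, shared across all time steps and samples.
            self.global_query = nn.Parameter(torch.randn(1, 1, d_model))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, d_model) temporal features, e.g. from a CNN encoder
            b, t, d = x.shape
            q = self.q_proj(x)
            k = self.k_proj(x)
            v = self.v_proj(x)
            scale = d ** -0.5

            # Classic self-attention over the temporal axis.
            self_attn = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v      # (b, t, d)

            # Global-query attention: one shared, learned query attends over all steps.
            gq = self.global_query.expand(b, -1, -1)                                 # (b, 1, d)
            global_attn = F.softmax(gq @ k.transpose(-2, -1) * scale, dim=-1) @ v    # (b, 1, d)

            # Combine the two views; broadcasting adds the global summary to every step.
            return self_attn + global_attn
    ```

    In the architecture sketched by the abstract, such a block would sit on top of the convolutional spatial features and feed the yield-prediction head; how the self-attention and global-query outputs are actually fused, and how the unlabeled spatial/temporal context is incorporated as an auxiliary objective, is not specified in the abstract.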